Results 1 - 20 of 31
1.
Proceedings of SPIE - The International Society for Optical Engineering ; 12602, 2023.
Article in English | Scopus | ID: covidwho-20245409

ABSTRACT

With the outbreak of COVID-19, prevention and treatment of the disease have become a public health priority, and patients are increasingly concerned about their symptoms. Because COVID-19 presents symptoms similar to the common cold, it cannot be diagnosed from symptoms alone; lung medical images must be examined to confirm whether a patient is COVID-19 positive. As the number of patients with pneumonia-like symptoms increases, more and more lung medical images need to be read, while the number of physicians falls far short of demand, leaving patients unable to have their condition assessed in time. In this regard, we perform image augmentation and data cleaning on a data set of COVID-19 lung medical images and design a deep learning classification network for accurate classification. Using a new fine-tuning method and hyperparameter tuning that we designed, the network achieves 95.76% classification accuracy on this task, with higher accuracy and less training time than classic convolutional neural network models. © 2023 SPIE.
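
The abstract does not detail the fine-tuning procedure; as a rough, minimal sketch of the general transfer-learning recipe such work builds on, the following PyTorch snippet freezes a pretrained backbone (ResNet-18 is an assumed stand-in, not the paper's network) and retrains only a new classification head for a two-class lung-image task. All hyperparameters here are hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumed stand-in backbone; the paper's actual network and
# fine-tuning schedule are not specified in the abstract.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained feature extractor.
for param in model.parameters():
    param.requires_grad = False

# Replace the head for binary COVID-positive / negative classification.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of (N, 3, 224, 224) images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```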

2.
Proceedings - 2022 2nd International Symposium on Artificial Intelligence and its Application on Media, ISAIAM 2022 ; : 135-139, 2022.
Article in English | Scopus | ID: covidwho-20236902

ABSTRACT

Deep learning (DL) approaches for image segmentation have achieved state-of-the-art performance in recent years. In particular, the U-Net model has been used successfully in the field of image segmentation. However, traditional U-Net methods extract features, aggregate long-range information, and reconstruct images by stacking convolution, pooling, and upsampling blocks, which is inefficient because it relies on stacked local operators. In this paper, we propose the multi-attentional U-Net, which is equipped with non-local-block-based self-attention, channel attention, and spatial attention for image segmentation. These blocks can be inserted into U-Net to flexibly aggregate information at the planar and spatial scales. We evaluate the multi-attentional U-Net model on three benchmark data sets: COVID-19 segmentation, skin cancer segmentation, and thyroid nodule segmentation. Results show that the proposed models achieve better performance with faster computation and fewer parameters, and that the multi-attention U-Net improves medical image segmentation results. © 2022 IEEE.
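
The abstract does not define the attention blocks; as one common realization of the channel-attention idea, the sketch below shows a squeeze-and-excitation-style module that could, in principle, be dropped between U-Net stages. The class name and reduction ratio are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style channel attention: reweight feature-map channels globally.
    A generic stand-in for the channel-attention block named in the abstract."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: (N, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1)
        return x * w                                 # excite: rescale each channel

# Example: reweight a 64-channel U-Net feature map.
feat = torch.randn(2, 64, 128, 128)
print(ChannelAttention(64)(feat).shape)              # torch.Size([2, 64, 128, 128])
```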

3.
Computers & Security ; : 103318, 2023.
Article in English | ScienceDirect | ID: covidwho-20231161

ABSTRACT

Cyber-attacks cause huge monetary losses to the institutions that fall victim to them, and they are becoming increasingly sophisticated. Protection against cyber-attacks has therefore become a highly requested resource for state and private institutions of every kind. During the pandemic caused by COVID-19, the number of cyber-attacks against both public and private health institutions increased, making cybersecurity systems a necessity. Various protection systems based on different machine learning algorithms have been proposed, but deep learning consistently provides the best results. In this work, we develop a deep learning model for the detection of different kinds of cyber-attacks, study the relevance of feature selection for this type of algorithm, and analyze the importance of attention mechanisms for improving how features are weighted within the model. We carry out experiments on two benchmark cybersecurity datasets and present a comparative study on both.
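
The paper's architecture is not described in the abstract; purely as an illustration of how an attention mechanism can weight tabular network-flow features before classification, here is a minimal sketch. Layer sizes, the feature count (41, roughly NSL-KDD-sized), and class count are assumptions.

```python
import torch
import torch.nn as nn

class FeatureAttentionClassifier(nn.Module):
    """Illustrative only: learn a soft weight per input feature, then classify.
    Not the architecture proposed in the paper."""
    def __init__(self, n_features: int, n_classes: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(n_features, n_features),
            nn.Softmax(dim=-1),          # attention weights over features
        )
        self.classifier = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = self.attn(x)           # (batch, n_features)
        return self.classifier(x * weights)

# Example: 41 flow features, 5 attack classes (both assumed).
logits = FeatureAttentionClassifier(41, 5)(torch.randn(8, 41))
print(logits.shape)                      # torch.Size([8, 5])
```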

4.
Biomedical Signal Processing and Control ; 85:105079, 2023.
Article in English | ScienceDirect | ID: covidwho-20230656

ABSTRACT

Combining transformers and convolutional neural networks is considered one of the most important directions for tackling medical image segmentation problems. To learn long-range dependencies and local contexts, previous approaches embedded a convolutional layer into the feedforward network inside the transformer block. However, a common issue is instability during training, since pre-layer normalization causes large differences in amplitude across layers. Furthermore, multi-scale features were fused directly by the transformer from encoder to decoder, which could disrupt information valuable for segmentation. To address these concerns, we propose the Advanced TransFormer (ATFormer), a novel hybrid architecture that combines convolutional neural networks and transformers for medical image segmentation. First, the traditional transformer block is refined into an Advanced Transformer Block, which adopts post-layer normalization to obtain mild activation values and employs scaled cosine attention with a shifted window for accurate spatial information. Second, a Progressive Guided Fusion module is introduced to make multi-scale features more discriminative while reducing computational complexity. Experimental results on the ACDC, COVID-19 CT-Seg, and Tumor datasets demonstrate the significant advantage of ATFormer over existing methods that rely solely on convolutional neural networks, transformers, or their combination.
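
For the scaled cosine attention named above, a minimal sketch of the general idea (cosine similarity between queries and keys divided by a learnable per-head temperature, as popularized by Swin Transformer V2) is shown below. Head count, temperature clamp, and shapes are assumptions; the shifted-window partitioning is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledCosineAttention(nn.Module):
    """Attention scores from cosine similarity divided by a learnable temperature,
    instead of dot products scaled by sqrt(d). A simplified illustration."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # One learnable logit scale (inverse temperature) per head.
        self.logit_scale = nn.Parameter(torch.zeros(num_heads, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Split heads: (b, heads, n, d_head)
        q, k, v = [t.view(b, n, self.num_heads, -1).transpose(1, 2) for t in (q, k, v)]
        # Cosine similarity = dot product of L2-normalized q and k.
        attn = F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)
        attn = attn * torch.clamp(self.logit_scale, max=4.6).exp()
        out = attn.softmax(dim=-1) @ v
        return self.proj(out.transpose(1, 2).reshape(b, n, d))

print(ScaledCosineAttention(64)(torch.randn(2, 49, 64)).shape)  # torch.Size([2, 49, 64])
```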

5.
J King Saud Univ Comput Inf Sci ; 35(7): 101596, 2023 Jul.
Article in English | MEDLINE | ID: covidwho-2328320

ABSTRACT

COVID-19 is a contagious disease that affects the human respiratory system. Infected individuals may develop serious illnesses, and complications may result in death. Using medical images to detect COVID-19 among essentially identical thoracic anomalies is challenging because it is time-consuming, laborious, and prone to human error. This study proposes an end-to-end deep-learning framework based on deep feature concatenation and a multi-head self-attention network. Feature concatenation involves fine-tuning the pre-trained backbone models of DenseNet, VGG-16, and InceptionV3, which are trained on the large-scale ImageNet dataset, whereas the multi-head self-attention network is adopted for performance gain. End-to-end training and evaluation are conducted on the COVID-19_Radiography_Dataset for binary and multi-classification scenarios. The proposed model achieved overall accuracies of 96.33% and 98.67% and F1 scores of 92.68% and 98.67% for the multi-class and binary classification scenarios, respectively. In addition, the study highlights the difference in accuracy (98.0% vs. 96.33%) and F1 score (97.34% vs. 95.10%) between feature concatenation and the best-performing individual model. Furthermore, a visual representation of the saliency maps of the employed attention mechanism, focusing on the abnormal regions, is presented using explainable artificial intelligence (XAI) technology. The proposed framework provided better COVID-19 prediction results, outperforming other recent deep learning models on the same dataset.
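
As a minimal sketch of the feature-concatenation-plus-attention idea (not the paper's exact head), the snippet below projects pooled features from several pretrained backbones to a common width, lets multi-head self-attention mix them as a short token sequence, and classifies the concatenation. The feature dimensions, projection width, and class count are assumptions.

```python
import torch
import torch.nn as nn

class ConcatAttentionHead(nn.Module):
    """Illustrative fusion head for pooled features from several backbones."""
    def __init__(self, feat_dims=(1024, 512, 2048), d_model=256, n_classes=4):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d, d_model) for d in feat_dims)
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(d_model * len(feat_dims), n_classes)

    def forward(self, feats):
        # feats: list of (batch, feat_dim) vectors, one per backbone.
        tokens = torch.stack([p(f) for p, f in zip(self.proj, feats)], dim=1)
        mixed, _ = self.attn(tokens, tokens, tokens)     # (batch, 3, d_model)
        return self.classifier(mixed.flatten(1))

# Example with random stand-ins for DenseNet / VGG-16 / InceptionV3 features.
feats = [torch.randn(8, 1024), torch.randn(8, 512), torch.randn(8, 2048)]
print(ConcatAttentionHead()(feats).shape)                # torch.Size([8, 4])
```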

6.
Engineering Applications of Artificial Intelligence ; 122, 2023.
Article in English | Web of Science | ID: covidwho-2310316

ABSTRACT

Vision Transformers (ViTs), with their remarkable potential to unravel the information contained within images, have evolved into one of the most prominent architectures in computer vision and are widely employed by researchers in both new and established lines of work. In this article, we investigate the intersection of vision transformers and medical images. We provide an overview of various ViT-based frameworks that researchers are using to tackle obstacles in medical computer vision. We survey the applications of Vision Transformers in different areas of medical computer vision, such as image-based disease classification, anatomical structure segmentation, registration, region-based lesion detection, captioning, report generation, and reconstruction across multiple medical imaging modalities that assist in medical diagnosis and treatment. We also describe the imaging modalities used in medical computer vision and, for deeper understanding, briefly explain the self-attention mechanism of transformers. Finally, the ViT-based solutions for each image analysis task are critically analyzed, open challenges are discussed, and pointers to possible future directions are given. We hope this review article will open future research directions for medical computer vision researchers.
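
For reference, the scaled dot-product self-attention and its multi-head form that the survey refers to are conventionally written (following the original Transformer formulation) as

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V,
\qquad
\mathrm{head}_i = \mathrm{Attention}(XW_i^{Q},\, XW_i^{K},\, XW_i^{V}),
\qquad
\mathrm{MultiHead}(X) = \mathrm{Concat}(\mathrm{head}_1,\dots,\mathrm{head}_h)\,W^{O}
```

where the queries $Q$, keys $K$, and values $V$ are linear projections of the token sequence $X$ (image patches in a ViT) and $d_k$ is the key dimension.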

7.
Lecture Notes on Data Engineering and Communications Technologies ; 156:505-514, 2023.
Article in English | Scopus | ID: covidwho-2298717

ABSTRACT

Clinical diagnosis based on computed tomography (CT) can be used as part of the diagnostic standard for COVID-19 pneumonia. To address the relatively low accuracy of traditional CT-based pneumonia classification models when distinguishing community-acquired pneumonia (CP), COVID-19 pneumonia (NCP), and normal cases, a new network model is proposed that combines the Swin Transformer with a multi-head axial self-attention (MASA) mechanism to analyze CT images and provide intelligence-assisted diagnosis. Specifically, the traditional multi-head self-attention (MSA) mechanism in the Swin Transformer encoders is partially replaced by MASA. The improved model is trained and tested on the commonly used pneumonia CT dataset CC-CCII. The results show that the proposed network outperforms the traditional ResNet50 and Vision Transformer networks in accuracy, sensitivity, and F1-measure. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
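
The abstract does not give the MASA formulation; as a simplified stand-in for the general axial-attention idea it builds on, the sketch below attends along image rows and then columns so that each position sees its full row and column rather than the whole plane. Head count and tensor layout are assumptions.

```python
import torch
import torch.nn as nn

class AxialSelfAttention(nn.Module):
    """Generic axial attention: attend along rows, then columns.
    A simplified illustration, not the paper's MASA block."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, height, width, channels)
        b, h, w, c = x.shape
        rows = x.reshape(b * h, w, c)                    # each row is a sequence
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c)
        cols = x.transpose(1, 2).reshape(b * w, h, c)    # each column is a sequence
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).transpose(1, 2)

feat = torch.randn(2, 14, 14, 96)                        # e.g. one transformer stage
print(AxialSelfAttention(96)(feat).shape)                # torch.Size([2, 14, 14, 96])
```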

8.
Expert Systems with Applications ; 224, 2023.
Article in English | Scopus | ID: covidwho-2297620

ABSTRACT

This study aims to estimate prices for the next 24 h in the Turkish electricity market with deep learning methods. The model is based on hourly electricity price data for the period 2017–2021. The model's Root Mean Square Error (RMSE) is 3.14, and its explanatory power R² is 0.94. Since the model also considers subgroups in the database, it can make price predictions for the pandemic period. To test the robustness and consistency of the model, twelve RNN-based models were re-estimated with the same data set; although all models predict prices successfully, the TEDSE model performs better than the others. This study will be especially beneficial to electricity market players and policymakers. In further studies, the TEDSE model can be used for price prediction in intraday energy markets. The study's most important contribution is methodological: the Transformer Encoder-Decoder with Self-Attention (TEDSE) model is used for the first time to estimate electricity prices. © 2023 Elsevier Ltd
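
The TEDSE specification is not given in the abstract; purely as a sketch of the encoder-decoder-with-self-attention pattern applied to 24-hour-ahead price forecasting, the snippet below encodes a history window and decodes a 24-step horizon with PyTorch's built-in Transformer. The layer sizes, history length, and zero-seeded (non-autoregressive) decoder are assumptions made for brevity.

```python
import torch
import torch.nn as nn

class PriceSeq2Seq(nn.Module):
    """Illustrative encoder-decoder with self-attention for hourly prices:
    encode a history window, predict the next 24 hours."""
    def __init__(self, d_model: int = 64, horizon: int = 24):
        super().__init__()
        self.horizon = horizon
        self.embed = nn.Linear(1, d_model)              # scalar price -> d_model
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, 1)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, past_hours, 1); the decoder is seeded with zeros here
        # purely for illustration (a real model would decode autoregressively).
        src = self.embed(history)
        tgt = torch.zeros(history.size(0), self.horizon, src.size(-1),
                          device=history.device)
        return self.out(self.transformer(src, tgt))     # (batch, 24, 1)

hist = torch.randn(16, 168, 1)                          # one week of hourly prices
print(PriceSeq2Seq()(hist).shape)                       # torch.Size([16, 24, 1])
```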

9.
IEEE Open J Eng Med Biol ; 4: 21-30, 2023.
Article in English | MEDLINE | ID: covidwho-2303141

ABSTRACT

Goal: To investigate whether a deep learning model can detect Covid-19 from disruptions in the human body's physiological (heart rate) and rest-activity rhythms (rhythmic dysregulation) caused by the SARS-CoV-2 virus. Methods: We propose CovidRhythm, a novel Gated Recurrent Unit (GRU) network with Multi-Head Self-Attention (MHSA) that combines sensor and rhythmic features extracted from heart rate and activity (steps) data gathered passively with consumer-grade smart wearables to predict Covid-19. A total of 39 features were extracted (standard deviation, mean, and min/max/average length of sedentary and active bouts) from wearable sensor data. Biobehavioral rhythms were modeled using nine parameters (mesor, amplitude, acrophase, and intra-daily variability). These features were then input to CovidRhythm to predict Covid-19 in the incubation phase (one day before biological symptoms manifest). Results: A combination of sensor and biobehavioral rhythm features achieved the highest AUC-ROC of 0.79 [Sensitivity = 0.69, Specificity = 0.89, F1 = 0.76], outperforming prior approaches in discriminating Covid-positive patients from healthy controls using 24 hours of historical wearable physiological data. Rhythmic features were the most predictive of Covid-19 infection, whether used alone or together with sensor features; sensor features best predicted healthy subjects. Circadian rest-activity rhythms, which combine 24 h activity and sleep information, were the most disrupted. Conclusions: CovidRhythm demonstrates that biobehavioral rhythms derived from consumer-grade wearable data can facilitate timely Covid-19 detection. To the best of our knowledge, our work is the first to detect Covid-19 using deep learning and biobehavioral rhythm features derived from consumer-grade wearable data.
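
A minimal sketch of the GRU-plus-MHSA pattern described above is shown below, assuming a sequence of 24 hourly vectors of 39 wearable-derived features per subject; hidden sizes, pooling, and the single-logit output are assumptions rather than the CovidRhythm specification.

```python
import torch
import torch.nn as nn

class GRUWithSelfAttention(nn.Module):
    """Illustrative GRU + multi-head self-attention classifier for sequences
    of wearable-derived features."""
    def __init__(self, n_features: int = 39, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.head = nn.Linear(hidden, 1)                 # Covid-positive logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, n_features), e.g. 24 hourly feature vectors.
        h, _ = self.gru(x)
        h, _ = self.attn(h, h, h)
        return self.head(h.mean(dim=1)).squeeze(-1)      # (batch,)

print(GRUWithSelfAttention()(torch.randn(4, 24, 39)).shape)  # torch.Size([4])
```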

10.
8th IEEE International Conference on Cloud Computing and Intelligence Systems, CCIS 2022 ; : 464-468, 2022.
Article in English | Scopus | ID: covidwho-2269352

ABSTRACT

In this paper, we propose a new novel coronavirus pneumonia image classification model based on the combination of Transformer and convolutional network(VQ-ViCNet), and present a vector quantization feature enhancement module for the inconspicuous characteristics of lung medical image data. This model extracts the local latent layer features of the image through the convolutional network, and learns the deep global features of the image data through the Transformer's multi-head self attention algorithm. After the calculation of convolution and attention, the features learned by the Transformer Encoder are enhanced by the vector quantization feature enhancement module and able to better complete the final downstream tasks. This model performs better than convolutional architectures, pure attention architectures and generative models on all 6 public datasets. © 2022 IEEE.
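
The core operation behind a vector-quantization module of this kind is typically a codebook lookup with a straight-through gradient; a minimal sketch of that operation is below. The codebook size, feature width, and the omission of the usual commitment losses are assumptions, not the VQ-ViCNet design.

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Minimal VQ layer: snap each feature vector to its nearest codebook entry
    and pass gradients straight through."""
    def __init__(self, num_codes: int = 512, dim: int = 256):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, tokens, dim) features from the Transformer encoder.
        cb = self.codebook.weight.expand(z.size(0), -1, -1)  # (batch, codes, dim)
        dist = torch.cdist(z, cb)                            # (batch, tokens, codes)
        idx = dist.argmin(dim=-1)                            # nearest code per token
        z_q = self.codebook(idx)
        return z + (z_q - z).detach()                        # straight-through estimator

print(VectorQuantizer()(torch.randn(2, 49, 256)).shape)      # torch.Size([2, 49, 256])
```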

11.
Neural Comput Appl ; 35(18): 13503-13527, 2023.
Article in English | MEDLINE | ID: covidwho-2263231

ABSTRACT

Covid text identification (CTI) is a crucial research concern in natural language processing (NLP). Social and electronic media are adding a large volume of Covid-related text to the World Wide Web, driven by effortless access to the Internet and electronic gadgets and by the Covid outbreak itself. Most of these texts are uninformative and contain misinformation, disinformation and malinformation that create an infodemic, so Covid text identification is essential for controlling societal distrust and panic. Though only limited Covid-related research (such as work on Covid disinformation, misinformation and fake news) has been reported in high-resource languages (e.g. English), CTI in low-resource languages (like Bengali) remains at a preliminary stage. Automatic CTI in Bengali text is challenging because of the lack of benchmark corpora, complex linguistic constructs, extensive verb inflexions and the scarcity of NLP tools, while manual processing of Bengali Covid texts is arduous and costly because of their messy, unstructured form. This research proposes a deep learning-based network (CovTiNet) to identify Covid text in Bengali. CovTiNet incorporates attention-based position embedding feature fusion for text-to-feature representation and an attention-based CNN for Covid text identification. Experimental results show that the proposed CovTiNet achieved the highest accuracy of 96.61±0.001% on the developed dataset (BCovC) compared to the other methods and baselines (i.e. BERT-M, IndicBERT, ELECTRA-Bengali, DistilBERT-M, BiLSTM, DCNN, CNN, LSTM, VDCNN and ACNN).

12.
Int J Mach Learn Cybern ; : 1-15, 2022 Oct 19.
Article in English | MEDLINE | ID: covidwho-2287507

ABSTRACT

Since the emergence of the novel coronavirus in December 2019, it has rapidly swept across the globe, with a huge impact on daily life, public health and the economy around the world. There is an urgent need for a rapid and economical detection method for Covid-19. In this study, we use a transformer-based deep learning method to analyze chest X-rays of normal, Covid-19 and viral pneumonia patients. Covid-Vision-Transformers (CovidViT) is proposed to detect Covid-19 cases from X-ray images. CovidViT is built on transformer blocks with the self-attention mechanism. To demonstrate its superiority, it is compared with other popular deep learning models; the experimental results show that CovidViT outperforms them and achieves 98.0% accuracy on the test set, indicating that the proposed model is excellent for Covid-19 detection. In addition, an online system for quick Covid-19 diagnosis is available at http://yanghang.site/covid19.

13.
Biomedical Signal Processing and Control ; 79, 2023.
Article in English | Scopus | ID: covidwho-2243008

ABSTRACT

Lung cancer is the uncontrolled growth of abnormal cells in one or both lungs and is one of the most dangerous diseases. Many feature extraction and classification methods have been proposed for this disease, but none of them gives sufficient results; moreover, these methods suffer from severe overfitting, which reduces detection accuracy. To overcome these issues, a Lung Disease Detection method using a Self-Attention Generative Adversarial Capsule Network optimized with the Sun flower Optimization Algorithm (SA-Caps GAN-SFOA-LDC) is proposed in this manuscript. First, the NIH chest X-ray image dataset is gathered from the Kaggle repository to diagnose lung disease. The chest X-ray images are then pre-processed with contrast limited adaptive histogram equalization (CLAHE) filtering to remove noise and enhance image quality. The pre-processed outputs are fed to the feature extraction process, which uses the empirical wavelet transform. The extracted features are given to the Self-Attention based Generative Adversarial Capsule classifier for detecting lung disease, and the hyperparameters of the SA-Caps GAN classifier are optimized using the Sun flower Optimization Algorithm. The simulation is implemented in MATLAB. The proposed SA-Caps GAN-SFOA-LDC method attains 21.05%, 33.28%, 30.27%, 29.68%, 32.57% and 44.28% higher accuracy, 30.24%, 35.68%, 32.08%, 41.27%, 28.57% and 34.20% higher precision, and 32.05%, 31.05%, 36.24%, 30.27%, 37.59% and 22.05% higher F-score compared with existing methods including SVM-SMO-LDC, CNN-MOSHO-LDC and XGboost-PSO-LDC. © 2022 Elsevier Ltd
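
The CLAHE pre-processing step mentioned above can be reproduced with OpenCV's standard CLAHE implementation; the sketch below shows the typical usage. The clip limit, tile size, and example file path are assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def clahe_preprocess(path: str, clip_limit: float = 2.0,
                     tile_grid: tuple = (8, 8)) -> np.ndarray:
    """Apply contrast limited adaptive histogram equalization (CLAHE) to a
    chest X-ray. Clip limit and tile size are typical defaults."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(path)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(gray)

# Hypothetical usage on one image from a chest X-ray folder:
# enhanced = clahe_preprocess("images/example_cxr.png")
```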

14.
Electronics (Switzerland) ; 12(1), 2023.
Article in English | Scopus | ID: covidwho-2239704

ABSTRACT

In recent years, chest X-ray (CXR) imaging has become one of the significant tools for assisting in the diagnosis and treatment of novel coronavirus pneumonia. However, CXR images have complex-shaped, variable lesion areas, which makes it difficult to identify novel coronavirus pneumonia from the images. To address this problem, a new deep learning network model (BoT-ViTNet) for automatic classification is designed in this study, built on ResNet50. First, we introduce multi-headed self-attention (MSA) into the last Bottleneck block of the first three stages of ResNet50 to enhance the ability to model global information. Then, to further enhance feature expressiveness and the correlation between features, TRT-ViT blocks, consisting of Transformer and Bottleneck units, are used in the final stage of ResNet50, improving the recognition of complex lesion regions in CXR images. Finally, the extracted features are concatenated, delivered to the global average pooling layer for global spatial information integration, and used for classification. Experiments conducted on the COVID-19 Radiography database show that the classification accuracy, precision, sensitivity, specificity, and F1-score of the BoT-ViTNet model are 98.91%, 97.80%, 98.76%, 99.13%, and 98.27%, respectively, outperforming other classification models. The experimental results show that our model classifies CXR images better. © 2022 by the authors.

15.
Neurocomputing ; 518: 496-506, 2023 Jan 21.
Article in English | MEDLINE | ID: covidwho-2240911

ABSTRACT

With the global outbreak of COVID-19, wearing face masks has been actively promoted as an effective public measure to reduce the risk of virus infection, but this measure causes face recognition to fail in many cases. It is therefore necessary to improve the performance of masked face recognition (MFR). Inspired by the successful application of self-attention in computer vision, we propose a Convolutional Visual Self-Attention Network (CVSAN), which uses self-attention to augment the convolution operator. Specifically, this is achieved by connecting a convolutional feature map, which enforces local features, to a self-attention feature map that is capable of modeling long-range dependencies. Since there is currently no publicly available large-scale masked face data, we generate a Masked VGGFace2 dataset based on a face detection algorithm to train the CVSAN model. Experiments show that the CVSAN algorithm significantly improves MFR performance compared to other algorithms.

16.
Comput Biol Med ; 155: 106633, 2023 03.
Article in English | MEDLINE | ID: covidwho-2228832

ABSTRACT

For medical image retrieval tasks, deep hashing algorithms are widely applied to large-scale datasets for auxiliary diagnosis because of the retrieval efficiency of hash codes. Most of these algorithms focus on feature learning while neglecting the discriminative areas of medical images and the hierarchical similarity of deep features and hash codes. In this paper, we tackle these dilemmas with a new Multi-scale Triplet Hashing (MTH) algorithm, which leverages multi-scale information, convolutional self-attention and hierarchical similarity to learn effective hash codes simultaneously. The MTH algorithm first designs a multi-scale DenseBlock module to learn multi-scale information from medical images. Meanwhile, a convolutional self-attention mechanism is developed to perform information interaction in the channel domain, which can capture the discriminative areas of medical images effectively. On top of the two paths, a novel loss function is proposed that not only conserves the category-level information of deep features and the semantic information of hash codes during learning, but also captures the hierarchical similarity of deep features and hash codes. Extensive experiments on the Curated X-ray Dataset, Skin Cancer MNIST Dataset and COVID-19 Radiography Dataset illustrate that the MTH algorithm can further enhance medical retrieval performance compared to other state-of-the-art medical image retrieval algorithms.


Subject(s)
COVID-19 , Skin Neoplasms , Humans , Algorithms , Learning , Semantics
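
For the triplet-hashing approach in the preceding record, a minimal sketch of the underlying idea is shown below: map deep features to relaxed binary codes with tanh, pull same-class codes together and push different-class codes apart with a triplet margin loss, and binarize at retrieval time. The encoder output width, code length, and margin are assumptions, not the MTH loss.

```python
import torch
import torch.nn as nn

class HashHead(nn.Module):
    """Toy hashing head: project deep features to K-bit codes in (-1, 1)."""
    def __init__(self, feat_dim: int = 512, bits: int = 48):
        super().__init__()
        self.fc = nn.Linear(feat_dim, bits)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.fc(feats))       # relaxed binary codes

head = HashHead()
triplet = nn.TripletMarginLoss(margin=1.0)

# Anchor and positive share a class, negative does not (features are stand-ins).
anchor, positive, negative = (torch.randn(16, 512) for _ in range(3))
loss = triplet(head(anchor), head(positive), head(negative))

# At retrieval time the relaxed codes are binarized to {-1, +1}.
binary_codes = torch.sign(head(anchor)).detach()
```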
17.
6th International Conference on Computer Science and Application Engineering, CSAE 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2194123

ABSTRACT

Over the past two years, COVID-19 has led to a widespread rise in online education, and knowledge tracing has been used on various educational platforms. However, most existing knowledge tracing models still struggle to capture long-term dependencies. To address this problem, we propose Multi-head ProbSparse Self-Attention for Knowledge Tracing (MPSKT). First, a temporal convolutional network is used to encode the position information of the input sequence. Then, Multi-head ProbSparse Self-Attention in the encoder and decoder blocks is used to capture the relationships between input sequences, while the convolution and pooling layers in the encoder block shorten the length of the input sequence, which greatly reduces the time complexity of the model and better addresses its long-term dependence problem. Finally, experimental results on three public online education datasets demonstrate the effectiveness of our proposed model. © 2022 Association for Computing Machinery.

18.
16th IEEE International Conference on Signal Processing, ICSP 2022 ; 2022-October:468-473, 2022.
Article in English | Scopus | ID: covidwho-2191931

ABSTRACT

Mortality prediction is a crucial challenge because of the complexity of multivariate time series (MTS), which are sparse, irregular, asynchronous and contain missing values for various reasons within a single acquisition. Various methods have been proposed to handle missing values for final mortality prediction. However, existing models only capture the temporal dependencies within a time series and are inefficient at capturing the dependencies between time series needed to rebuild missing values for mortality prediction. To address these challenges, we present an end-to-end imputation and mortality prediction model, the bidirectional coupled and Gumbel subset network (BiCGSN), for mortality prediction with such irregular multivariate time series. Our proposed model (BiCGSN) uses a recurrent network to learn the temporal dependencies (intra-time-series couplings) and a Gumbel selector on multi-head attention to obtain the relationships between variables (inter-time-series couplings) in the forward and backward directions. The learned bidirectional inter- and intra-time-series couplings are then fused to impute missing values for further mortality prediction. We evaluate our model on the PhysioNet2012 and COVID-19 datasets for imputation and mortality prediction. Experiments show that BiCGSN obtains AUCs of 0.869 and 0.911 on the two real-world datasets, respectively, and outperforms all baselines. © 2022 IEEE.
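
The abstract does not specify how the Gumbel selector is wired into the attention; purely as an illustration of the general Gumbel-softmax gating idea it draws on, the sketch below samples a (nearly) discrete keep/drop mask over input variables in a differentiable way. The variable count, temperature, and per-variable logits are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GumbelVariableSelector(nn.Module):
    """Illustrative Gumbel-softmax gate over input variables: sample a (nearly)
    discrete mask that keeps a subset of series for downstream imputation."""
    def __init__(self, n_vars: int, tau: float = 0.5):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_vars, 2))  # per-variable keep/drop
        self.tau = tau

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_vars); column j is one physiological series.
        gate = F.gumbel_softmax(self.logits, tau=self.tau, hard=True)[:, 0]
        return x * gate                                      # mask dropped variables

print(GumbelVariableSelector(37)(torch.randn(4, 48, 37)).shape)  # torch.Size([4, 48, 37])
```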

19.
3rd International Conference on Intelligent Computing, Instrumentation and Control Technologies, ICICICT 2022 ; : 1647-1652, 2022.
Article in English | Scopus | ID: covidwho-2136268

ABSTRACT

Covid-19, a highly infectious illness caused by the severe acute respiratory syndrome coronavirus 2, has harmed the health of people worldwide by causing severe respiratory problems and, in some cases, death. The infection needs to be monitored and detected at the right time to curb the growth of the Covid-19 pandemic and to cure the disease through accurate diagnosis and proper medication. To address this issue, a convolutional neural network (CNN) model integrated with self-attention has been proposed. A disadvantage of CNNs is that the convolution operator is limited to a local receptive field, so we incorporate a self-attention mechanism between image representations at deep layers, allowing the model to learn both local and long-range dependencies of the image. The experimental results illustrate the efficacy of the proposed model and show that equipping the CNN architecture with a self-attention module improves Covid-19 detection. © 2022 IEEE.

20.
Comput Biol Med ; 151(Pt A): 106306, 2022 Dec.
Article in English | MEDLINE | ID: covidwho-2104651

ABSTRACT

The outbreak of novel coronavirus pneumonia has brought severe health risks to the world. Detection of COVID-19 based on the UNet network has attracted widespread attention in medical image segmentation. However, the traditional UNet model struggles to capture the long-range dependencies of an image because of the fixed receptive field of the convolution kernel. The Transformer encoder overcomes the long-range dependence problem, but Transformer-based segmentation approaches cannot effectively capture fine-grained details. To address this challenge, we propose TDD-UNet, a transformer with a dual-decoder UNet for COVID-19 lesion segmentation. We introduce the multi-head self-attention of the Transformer into the UNet encoding layers to extract global context information, and the dual-decoder structure improves foreground segmentation by predicting the background and applying deep supervision. We performed quantitative analysis and comparison of our proposed method on four public datasets with different modalities, including CT and CXR, to demonstrate its effectiveness and generality in segmenting COVID-19 lesions, and conducted ablation studies on the COVID-19-CT-505 dataset to verify the effectiveness of the key components of our model. The proposed TDD-UNet achieves higher mean Dice and Jaccard scores and the lowest standard deviation compared to competitors, yielding better segmentation results than other state-of-the-art methods.


Subject(s)
COVID-19 , Communication Aids for Disabled , Humans , COVID-19/diagnostic imaging , Algorithms , Heart